This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods.
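The mutual-information objective and its lower bound mentioned above can be written out as follows. This is a sketch following the standard InfoGAN formulation: \(c\) denotes the structured latent codes, \(Q(c \mid x)\) an auxiliary variational distribution approximating the true posterior \(P(c \mid x)\), and \(\lambda\) a regularization weight.

```latex
% InfoGAN minimax game: the standard GAN value function V(D, G)
% regularized by a mutual-information term L_I(G, Q)
\min_{G, Q} \max_{D} \; V_{\mathrm{InfoGAN}}(D, G, Q)
    = V(D, G) - \lambda \, L_I(G, Q)

% Variational lower bound on the mutual information I(c; G(z, c)),
% obtained by replacing the intractable posterior P(c | x) with Q(c | x):
L_I(G, Q)
    = \mathbb{E}_{c \sim P(c),\; x \sim G(z, c)}
      \bigl[ \log Q(c \mid x) \bigr] + H(c)
    \;\le\; I\bigl(c; G(z, c)\bigr)
```

Because \(H(c)\) is constant for fixed latent-code distributions, maximizing the bound amounts to maximizing \(\mathbb{E}[\log Q(c \mid x)]\), which can be optimized efficiently with a recognition network sharing layers with the discriminator.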
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
Many modern online 3D applications and video games rely on parametric models of human faces to create believable avatars. However, manually reproducing someone's facial likeness with a parametric model is difficult and time-consuming. A machine learning solution to this task is highly desirable but also challenging. This paper proposes a new approach to the so-called Face-to-Parameter problem (F2P for short), which aims to reconstruct a parametric face from a single image. The proposed method leverages synthetic data, domain decomposition, and domain adaptation to address the multifaceted challenges of solving F2P. An open-source codebase illustrates our key observations and provides a means of quantitative evaluation. The proposed approach proves practical in industrial applications: it improves accuracy and allows for more efficient model training. These techniques have the potential to extend to other types of parametric models.